
    The CONEstrip algorithm

    Uncertainty models such as sets of desirable gambles and (conditional) lower previsions can be represented as convex cones. Checking the consistency of such models, and drawing inferences from them, requires solving feasibility and optimization problems. We restrict attention to finitely generated models. For closed cones, linear programming suffices; for cones derived from conditional lower previsions, there is an efficient algorithm that iterates linear programs. We present an efficient algorithm for general cones that likewise iterates linear programs.
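
    As a point of reference for the closed-cone special case mentioned above, the sketch below checks whether a gamble lies in a finitely generated closed cone with a single linear program. It illustrates that special case only, not the CONEstrip algorithm itself, and all names in it are our own.

        # Membership of a gamble f in the closed cone spanned by the rows of R:
        # feasibility of  sum_i lambda_i * r_i = f  with all lambda_i >= 0.
        import numpy as np
        from scipy.optimize import linprog

        def in_closed_cone(R, f):
            """R: (k, n) array, one generating gamble per row; f: (n,) array."""
            k = R.shape[0]
            res = linprog(c=np.zeros(k),           # zero objective: pure feasibility
                          A_eq=R.T, b_eq=f,        # sum_i lambda_i * r_i = f
                          bounds=[(0, None)] * k,  # conic (nonnegative) coefficients
                          method="highs")
            return res.success

        R = np.array([[1.0, 0.0], [1.0, 1.0]])
        print(in_closed_cone(R, np.array([2.0, 1.0])))   # True: r_1 + r_2
        print(in_closed_cone(R, np.array([-1.0, 0.0])))  # False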

    A new method for learning imprecise hidden Markov models

    We present a method for learning imprecise local uncertainty models in stationary hidden Markov models. If there is enough data to justify precise local uncertainty models, then existing learning algorithms, such as the Baum–Welch algorithm, can be used. When there is not enough evidence to justify precise models, the method we suggest here has a number of interesting features.
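
    The abstract leaves the imprecise method itself to the paper, but the precise baseline it mentions is standard. Below is a minimal, unscaled Baum–Welch sketch for a discrete hidden Markov model, assuming numpy only; the function names are ours, and a serious implementation would rescale or work in log space to avoid underflow on long sequences.

        import numpy as np

        def forward_backward(obs, A, B, pi):
            """Unscaled forward/backward recursions (short sequences only)."""
            T, K = len(obs), A.shape[0]
            alpha = np.zeros((T, K))
            beta = np.zeros((T, K))
            alpha[0] = pi * B[:, obs[0]]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            beta[-1] = 1.0
            for t in range(T - 2, -1, -1):
                beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
            return alpha, beta

        def baum_welch(obs, K, V, n_iter=50, seed=0):
            """EM for a K-state HMM over V symbols; obs is a 1-D integer array."""
            rng = np.random.default_rng(seed)
            A = rng.dirichlet(np.ones(K), size=K)   # transition probabilities
            B = rng.dirichlet(np.ones(V), size=K)   # emission probabilities
            pi = np.full(K, 1.0 / K)                # initial-state distribution
            for _ in range(n_iter):
                alpha, beta = forward_backward(obs, A, B, pi)
                gamma = alpha * beta                 # posterior of state at time t
                gamma /= gamma.sum(axis=1, keepdims=True)
                xi = (alpha[:-1, :, None] * A[None]  # posterior of state pairs
                      * (B[:, obs[1:]].T * beta[1:])[:, None, :])
                xi /= xi.sum(axis=(1, 2), keepdims=True)
                pi = gamma[0]
                A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
                for v in range(V):
                    B[:, v] = gamma[obs == v].sum(axis=0)
                B /= gamma.sum(axis=0)[:, None]
            return A, B, pi

        obs = np.array([0, 1, 0, 0, 1, 2, 2, 1, 0, 1])
        A, B, pi = baum_welch(obs, K=2, V=3)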

    Sets of Priors Reflecting Prior-Data Conflict and Agreement

    In Bayesian statistics, the choice of prior distribution is often debatable, especially if prior knowledge is limited or data are scarce. In imprecise probability, sets of priors are used to accurately model and reflect prior knowledge. This has the advantage that prior-data conflict sensitivity can be modelled: ranges of posterior inferences should be larger when prior and data are in conflict. We propose a new method for generating prior sets which, in addition to prior-data conflict sensitivity, also allows strong prior-data agreement to be reflected by decreased posterior imprecision.
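
    To give a rough flavour of the mechanism (our own illustration, in the spirit of the generalized iLUCK-models this line of work builds on, not the paper's construction): take a rectangular set of Beta priors parametrised by prior mean y0 and prior strength n0, and track the range of posterior means for Bernoulli data. The interval comes out wider when the data conflict with the prior set.

        def posterior_mean_interval(k, n, y0_range, n0_range):
            """Range of E[theta | k successes in n trials] = (n0*y0 + k) / (n0 + n)
            over a rectangle of Beta(n0*y0, n0*(1 - y0)) priors. The expression
            is monotone in y0 and in n0, so the extremes sit at the corners."""
            vals = [(n0 * y0 + k) / (n0 + n)
                    for y0 in y0_range for n0 in n0_range]
            return min(vals), max(vals)

        box = dict(y0_range=(0.4, 0.6), n0_range=(2.0, 10.0))
        print(posterior_mean_interval(k=5, n=10, **box))   # agreement: (0.45, 0.55)
        print(posterior_mean_interval(k=10, n=10, **box))  # conflict: ~(0.70, 0.93), wider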

    Factorisation properties of the strong product

    We investigate a number of factorisation conditions in the framework of sets of probability measures, or coherent lower previsions, with finite referential spaces. We show that the so-called strong product constitutes one way to combine a number of marginal coherent lower previsions into an independent joint lower prevision, and we prove that under some conditions it is the only independent product that satisfies the factorisation conditions.
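
    For intuition, when both marginal credal sets are finitely generated, the strong product can be evaluated as the lower envelope of the expectations under all products of extreme points; since expectation is bilinear, the minimum is attained at those extreme points. A small sketch on a made-up binary example, with all names our own:

        import numpy as np
        from itertools import product

        def strong_product_lower(f, ext1, ext2):
            """Lower prevision of gamble f on X1 x X2 under the strong product:
            minimum expectation over all products p (x) q of extreme points
            of the two marginal credal sets.
            f: (|X1|, |X2|) array; ext1, ext2: lists of 1-D marginal pmfs."""
            return min(float(p @ f @ q) for p, q in product(ext1, ext2))

        ext1 = [np.array([0.3, 0.7]), np.array([0.6, 0.4])]  # extreme points on X1
        ext2 = [np.array([0.5, 0.5]), np.array([0.2, 0.8])]  # extreme points on X2
        f = np.array([[1.0, 0.0], [0.0, 1.0]])               # pays 1 when X1 == X2
        print(strong_product_lower(f, ext1, ext2))           # 0.44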

    Maximin and maximal solutions for linear programming problems with possibilistic uncertainty

    We consider linear programming problems with uncertain constraint coefficients described by intervals or, more generally, possibility distributions. The uncertainty is given a behavioral interpretation using coherent lower previsions from the theory of imprecise probabilities. We give a meaning to the linear programming problems by reformulating them as decision problems under such imprecise-probabilistic uncertainty. We provide expressions for and illustrations of the maximin and maximal solutions of these decision problems and present computational approaches for dealing with them.
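
    In the interval special case with nonnegative variables, the maximin (worst-case) solution reduces to an ordinary linear program: each uncertain constraint is tightest when its coefficients sit at their upper endpoints. The sketch below uses a made-up instance; the paper's possibilistic treatment is more general.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical instance: maximize c @ x  subject to  A x <= b,  x >= 0,
        # where each entry of A is only known to lie between A_lo and A_hi.
        c = np.array([3.0, 2.0])
        A_lo = np.array([[1.0, 1.0], [0.5, 2.0]])  # shown only to exhibit the intervals
        A_hi = np.array([[2.0, 1.5], [1.0, 3.0]])
        b = np.array([10.0, 12.0])

        # With x >= 0, each row of A x is largest when A = A_hi, so the
        # maximin solution is the optimum of the LP with the upper endpoints.
        res = linprog(c=-c,                        # linprog minimizes, so negate c
                      A_ub=A_hi, b_ub=b,
                      bounds=[(0, None)] * len(c),
                      method="highs")
        print(res.x, -res.fun)                     # maximin solution and its value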

    Robust Inference of Trees

    This paper is concerned with the reliable inference of optimal tree-approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the dependency strength between random variables by the index called mutual information. In this paper, reliability is achieved by Walley's imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in posterior interval expectations for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers this substructure in time O(m^4), m being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees.
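
    As a toy proxy for the idea (our own illustration, not the paper's exact O(m^4) algorithm): estimate pairwise mutual information under two different Dirichlet-style smoothings, build a Chow-Liu (maximum-weight spanning) tree for each, and keep only the edges common to both trees. The smoothing scheme and all names below are assumptions.

        import numpy as np
        from itertools import combinations
        from scipy.sparse.csgraph import minimum_spanning_tree

        def mutual_info(joint):
            """Mutual information of a 2-D joint pmf (0 log 0 taken as 0)."""
            px = joint.sum(axis=1, keepdims=True)
            py = joint.sum(axis=0, keepdims=True)
            nz = joint > 0
            return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

        def chow_liu_edges(data, s):
            """Chow-Liu tree over the binary columns of data, after spreading
            s pseudo-counts uniformly over each 2x2 contingency table."""
            n, m = data.shape
            G = np.zeros((m, m))
            for i, j in combinations(range(m), 2):
                counts = np.array([[np.sum((data[:, i] == a) & (data[:, j] == b))
                                    for b in (0, 1)] for a in (0, 1)], float)
                mi = mutual_info((counts + s / 4.0) / (n + s))
                G[i, j] = -(mi + 1.0)  # negate (and shift off zero) so a *minimum*
                                       # spanning tree maximises the total MI
            mst = minimum_spanning_tree(G)  # upper triangle encodes the graph
            return {tuple(sorted(map(int, e))) for e in zip(*mst.nonzero())}

        rng = np.random.default_rng(1)
        x0 = rng.integers(0, 2, 200)
        x1 = (x0 ^ (rng.random(200) < 0.1)).astype(int)  # strongly tied to x0
        x2 = rng.integers(0, 2, 200)                     # independent noise
        data = np.column_stack([x0, x1, x2])
        print(chow_liu_edges(data, s=0.0) & chow_liu_edges(data, s=2.0))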